Since that time, security has gained almost a cult status. Individuals I know who have never had a clue about the subject are suddenly diving for security information. You hear it in restaurants all the time. As you are eating your lunch, the buzz floats overhead: firewall, router, packet filtering, e-mail bombing, hackers, crackers...the list is long indeed. (This book would never have been written if the climate weren't just so.) By now, most people know that the Internet is insecure, but few know exactly why. Not surprisingly, those very same people are concerned, because most of them intend to implement some form of commerce on the Internet. It is within this climate that Internet Voodoo has arisen, conjured by marketeers from the dark chaos that looms over the Net and its commercial future.
Marketing folks capitalize on ignorance--that's a fact. I know resellers today who sell 8MB SIMMs for $180 and get away with it. However, while technical consultants do often overcharge their customers, there is probably no area where this activity is more prominent than in the security field. This should be no surprise; security is an obscure subject. Customers are not in a position to argue about prices, techniques, and so forth because they know nothing about the subject. This is the current climate, which offers unscrupulous individuals a chance to rake in the dough. (And they are, at an alarming rate.)
The purpose of this chapter, then, is to offer advice for individuals and small businesses. I cannot guarantee that this is the best advice, but I can guarantee that it is from experience. Naturally, everyone's experience is different, but I believe that I am reasonably qualified to offer some insight into the subject. That said, let's begin.
The truth is, TCP/IP has been around for a long, long time. For example, as I reported in Chapter 18, "Novell," NetWare had fully functional TCP/IP built into its operating system back in 1991. UNIX has had it for far longer. So there is no real problem here. The knowledge is available out there in the void.
The great majority of security breaches stem from human error. (That is because crackers with limited knowledge can easily cut deep into systems that are erroneously configured. On more carefully configured networks, 90 percent of these self-proclaimed "super crackers" couldn't get the time of day from their target.)
These human errors generally result from a lack of experience. The techniques to protect an Internet server have not significantly changed over the past few years. If a system administrator or security administrator fails to catch this or that hole, he needs to bone up on his advisories.
So, before you haul off and spend thousands (or even tens of thousands) of dollars on a security consult, there are some things you should consider. Here are a couple of test questions:
NOTE: I will readily admit that some techniques have been improved, largely by the academic community and not so much by commercial vendors. Commercial vendors are usually slightly behind the academic communities, perhaps by a few months or so. Examples of this might include the development of automated tools to screen your system for known security holes. Many of these are written by students or by freelance software developers. These tools certainly streamline the process of checking for holes, but the holes are commonly known to any security administrator worth his salt.
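To make the idea concrete, here is a minimal sketch (in Python, purely for illustration and not any particular product) of what such a screening tool does at its simplest: it probes a handful of well-known ports and records whatever banner the service announces, so an administrator can compare the advertised versions against published advisories. The host name and the port list are assumptions made for the example.

import socket

# Ports whose services commonly announce a version banner on connect.
# This list is illustrative, not exhaustive.
COMMON_PORTS = {21: "ftp", 23: "telnet", 25: "smtp", 79: "finger", 110: "pop3"}

def grab_banners(host: str, timeout: float = 3.0) -> dict:
    """Return {port: banner} for every probed port that answers."""
    findings = {}
    for port in COMMON_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                banner = s.recv(256).decode("ascii", errors="replace").strip()
                findings[port] = banner
        except OSError:
            continue  # closed, filtered, or silent; nothing to report
    return findings

if __name__ == "__main__":
    for port, banner in grab_banners("localhost").items():
        print(f"{COMMON_PORTS[port]}/{port}: {banner}")

A real scanner of the kind distributed by students and freelance developers adds a database of known holes on top of this; the probing itself is no more mysterious than the loop above.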
Before you can make an educated choice of a security consultant, you need to be familiar with basic security principles. That's what this chapter is really all about.
So, point one is this: Other than penetration testing, all active, hands-on security procedures should be undertaken at your place of business or wherever the network is located. Do not forward information to a potential consultant over the Internet, do not hire someone sight unseen, and finally, do not contract a consultant whose expertise cannot be in some way verified.
NOTE: As an example, an individual on the East Coast recently posted an article in Usenet requesting bids on a security consult. I contacted that party to discuss the matter, mainly out of curiosity. Within three hours, the party forwarded to me his topology, identifying which machines had firewalls running, which machines were running IP forwarding, and so forth. Granted, this individual was simply looking for bids, but he forwarded this type of sensitive information to me, an individual he had neither seen nor heard of before. Moreover, if he had done more research, he would have determined that my real name was unobtainable from my e-mail address, my Web page, or even my provider. Were it not for the fact that I was on great terms with my then-current provider, he [the provider] would not even know my name. So, the person on the East Coast forwarded extremely sensitive information to an unknown source--information that could have resulted in the compromise of his network.
If his explanation is that the level of technical expertise required to exploit the hole is highly advanced, this is still not a valid reason to let it slide, unless there are currently no known solutions to the problem. If there are options, take them. Never assume (or allow a consultant to assume) that because a hole is obscure or difficult to exploit, it is okay to allow that hole to exist.
Only a few months ago, it was theorized that a Java applet could not access a client's hard disk drive. That has since been proven false. The argument initially supporting the "impossibility" of the task was this: The programming skill required was not at a level typically attained by most crackers. That was patently incorrect. Crackers spend many hours trying to discover new holes (or new ways of implementing old ones). With the introduction of new technologies, such as Java and ActiveX, there is no telling how far a cracker could take a given technique.
Security through obscurity was once a sound philosophy. Many years ago, when the average computer user had little knowledge of his own operating system (let alone knowledge of multiple operating systems), the security-through-obscurity approach tended to work out. Things were more or less managed on a need-to-know basis. The problem with security through obscurity, however, becomes more obvious on closer examination. It breaks down to matters of trust.
In the old days, when security through obscurity was practiced religiously, it required that certain users know information about the system; for example, where passwords were located and what special characters had to be typed at the prompt. It was common, actually, for a machine, upon connection, to issue a rather cryptic prompt. (Perhaps this can be likened to the prompt one might have received as a Delphi user just a few years ago.) This prompt was expecting a series of commands, including the carrier service, the terminal emulation, and so on. Until these variables were entered correctly (with some valid response, of which there were many), nothing would happen. For example, if the wrong string was entered, a simple ? would appear. A hacker coming across such a system would naturally be intrigued, but he could spend many hours (if not weeks) typing in commands that would fail. (Although the command HELP seems to be a pretty universal way to get information on almost any system.)
Things changed when more experienced users began distributing information about systems. As more and more information leaked out, more sophisticated methods of breaching security were developed. For example, it was shortly after the first release of internal procedures in CBI (the Equifax credit-reporting system) that commercial-grade software packages were developed to facilitate breaking into that famous computerized consumer credit bureau. These efforts finally culminated in the introduction of a tool called CBIHACK, which automated most of the effort behind cracking Equifax.
Today, it is common for users to know several operating systems in at least a fleeting way. More importantly, information about systems security has been so widely disseminated that even those just starting a career in cracking know where password files are located, how authentication is accomplished, and so forth. As such, security through obscurity is no longer a valid stance, nor should it be, especially because of one insidious element of it: for it to work at all, humans must be trusted with information. Even when this philosophy had some value, one or more individuals with an immediate need to know might later become liabilities. Disgruntled employees are historically well known to be in this category. As insiders, they would typically know things about a system (procedures, logins, passwords, and so forth). That knowledge made the security inherently flawed from the start.
It is for these reasons that many authentication procedures are now automated. In automated authentication procedures, the human being plays no part. Unfortunately, however, as you will learn in Chapter 28, "Spoofing Attacks," even these automated procedures are now suspect.
In any event, view with suspicion any proposal that a security hole (small though it may be) should be left alone.
If you are a small firm and cannot afford to invest a lot of money in security, you may have to choose more carefully. However, your consultant should meet at least the following requirements:
It would be good if you could verify that your potential consultant has been involved in monitoring and perhaps plugging an actual breach. Good examples are situations where he took part in an investigation of a criminal trespass or other network violation.
Equally, past experience working for an ISP is always a plus.
Would you let the world walk through the front door of your home? Would you let complete strangers rifle through your drawers, looking for personal documents or financial statements? Of course not. Then why would you let someone do it over a network? The answer is: You wouldn't. The problem is, computers seem relatively benign, so benign that we may forget how powerful their technology really is.
Software vendors want us to rush to the Internet. The more we use the network, the more software they can sell. In this marketing frenzy, they attempt to minimize some fairly serious problems out there. The truth is, the Internet is not secure and will continue to exist in this state of insecurity for some time to come. This is especially so because many of the networking products used in the future will be based on the Microsoft platform.
Admittedly, Microsoft makes some of the finest software in the world. Security, however, has not been its particular area of expertise. Its Internet operating system is going to be NT--that's a fact. That is also where the majority of Microsoft's security efforts are being concentrated, and it has made some significant advances. However, in the more than 20 years that UNIX has been in existence, it has never been completely secure. This is an important point: UNIX is a system that was designed--almost from its beginning--as an operating system for use on the Internet. It was the platform on which much of the Defense Department's ARPAnet development was carried out. The people who designed it are among the most talented (and technically minded) software engineers on the planet. And even after all this, UNIX is not secure. We should expect, then, that Windows NT will take some time to get the bugs out.
So, in closing on this subject, I relate this: Your network is your home. It is worthy of protection, and that protection costs money. Which brings us to the next issue...
While this is true, it does not mean that you can get a homogenous network secured for next to nothing. In most instances, it is not possible for security attributes to simply be cloned or replicated on all workstations within the network. Various security issues may develop. Some of those involve topology, as I have explained in other chapters and will again discuss here.
We know that a network segment is a closed area, almost a network within itself. We also know that spoofing beyond that network segment is almost impossible. (Almost.) The more segments your network is divided into, the more secure your network will be. (Ideally, each machine would be hardwired to a router. This would entirely eliminate the possibility of IP spoofing, but it is obviously cost prohibitive.) Where you make those divisions will depend upon a close assessment of risk, which will be determined between your technical staff and the consultant. For each segment, you will incur further cost, not only for the consultant's services but for the hardware (and possibly for software).
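If it helps to see the rule itself, the following sketch expresses the basic anti-spoofing test applied at a segment boundary: a packet arriving from outside the segment must not claim a source address that belongs inside it. This is only a conceptual illustration (in practice the router or firewall applies the rule, not a script), and the segment address is an invented example.

import ipaddress

# The address block assigned to the protected segment (example value).
INSIDE_SEGMENT = ipaddress.ip_network("192.168.10.0/24")

def looks_spoofed(source_ip: str, arrived_from_outside: bool) -> bool:
    """Flag packets that claim an inside source address yet arrive on the outside interface."""
    return arrived_from_outside and ipaddress.ip_address(source_ip) in INSIDE_SEGMENT

# A packet from the Internet claiming to be an internal workstation fails the test...
assert looks_spoofed("192.168.10.14", arrived_from_outside=True)
# ...while the same address seen on the inside interface is perfectly normal.
assert not looks_spoofed("192.168.10.14", arrived_from_outside=False)

Every additional segment boundary is another place where this check can be applied, which is why dividing the network raises both the security and the cost.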
Certainly, even from a practical standpoint, there are immediate problems. First, due largely to the division between the PC and workstation worlds, the security consultants you contract may be unfamiliar with one or more of the platforms within your network, and they may need to call in outside help for them. Also, and this is no small consideration, your consultants may ultimately be forced to provide at least a small portion of proprietary code: their own. If this subject crops up, it should be discussed thoroughly. There is a good chance that you can save at least some cost by having these consultants tie together existing security packages, using their own code as the glue. This is not nearly as precarious as it sounds. It may involve nothing more than redirecting the output of log files or other ongoing processes to plain text (or some other form suitable for scanning by a program on another platform).
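As a concrete (and deliberately simple) example of that sort of glue, the sketch below reads a UNIX-style system log and rewrites failed-login entries as plain comma-separated text that a scanner on any other platform could consume. The log path and the pattern it matches are assumptions made for the illustration; a consultant's real glue code would be tuned to your own logs.

import csv
import re

LOG_IN  = "/var/log/messages"        # example syslog location
LOG_OUT = "failed_logins.csv"        # plain-text output readable on another platform
# Matches lines such as: "Feb  3 14:22:01 host login[412]: authentication failure ... user=root"
PATTERN = re.compile(r"^(\w{3}\s+\d+\s[\d:]+)\s+(\S+)\s+.*authentication failure.*user=(\S+)")

with open(LOG_IN) as src, open(LOG_OUT, "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["timestamp", "host", "user"])
    for line in src:
        match = PATTERN.match(line)
        if match:
            writer.writerow(match.groups())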
The problem with hiring toolsmiths of this sort is that you may find your security dependent upon them. If your local system administrator is not familiar with the code they used, you may have to rely on the consultants to come for second and third visits. To guard against this, you should ensure good communications between your personnel and the security team. This is a bit harder than it seems.
First, you have to recognize at least this: Your system administrator is God on the network. That network is his domain, and he probably takes exceptional pride in maintaining it. (I have seen some extraordinary things done by system administrators--truly commercial-grade applications running, custom interfaces, and so forth.) When an outside team comes to examine your system administrator's backyard, no matter what they say, the experience feels a little intrusive. Diplomacy is really an important factor. Remember: The consultants will leave, but you have to live with your system administrator on a daily basis.
This information should be bound together. (There are copying services that will bind such a folder, such as Kinko's Copies, or perhaps you have in-house facilities that can do this.) Each section should be separated by a tab that identifies that section. Contained within this folder should also be the following items:
The next step may or may not be within your budget, but if it is, I would strongly recommend it. Locate two separate security firms known to have good reputations. (Even if they are in a different state; it doesn't matter.) Ask those firms what it would cost to examine the information and make a recommendation, a kind of mock bid. Included within their summaries should be a report of how such a job would be implemented if they were doing it. This will not only serve as an index for what the probable cost and effort would be, but also may alert you or your system administrator to special issues, issues particular to your precise configuration. That having been done, you can begin your search for a good, local source.
However, if you are determined to provide dedicated access, with a server under your local control, there are some things you can do to greatly increase security. First, if the only box you are placing out on the freeway is a Web server (and you are concerned about that server being cracked), you can use read-only media. This procedure is admittedly more difficult to implement than a live file system (one that is read/write), but the gains you realize in security are immense. Under such a scenario, even if a cracker gains root access, there is very little that he can do. The downside to this, of course, is that dynamic pages cannot be built on-the-fly, but if you are providing an auto-quote generator or some similar facility (perhaps even interfacing with a database), it can still be done.
Really, the key is to enclose all CGI in a restricted area. The CGI programs read the data on the read-only media and generate a resulting page. This is a very secure method of providing technical support, product lists, and prices to clients in the void. Essentially, so long as you back up your CGI, you could have that identical machine up in one hour or less, even if crackers did manage to crash it. This type of arrangement is good for those who are only providing information. It is poor for (and inapplicable to) those seeking to accept information. If you are accepting information, this might involve a combination of secure HTML packages or protocols, where the information received is written to removable, write-once media.
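A minimal sketch of such a CGI program follows, purely as an illustration: it reads a tab-delimited price list from read-only media and builds the page on the fly. The mount point and file name are invented for the example.

#!/usr/bin/env python3
# Reads product/price pairs from read-only media and emits an HTML page.
PRICE_LIST = "/cdrom/data/prices.txt"   # read-only media mounted here (example path)

print("Content-Type: text/html")
print()
print("<html><body><h1>Current Price List</h1><table>")
try:
    with open(PRICE_LIST) as f:
        for line in f:
            product, price = line.rstrip("\n").split("\t")
            print(f"<tr><td>{product}</td><td>{price}</td></tr>")
except OSError:
    print("<tr><td colspan='2'>Price list unavailable</td></tr>")
print("</table></body></html>")

Because the data lives on read-only media, there is little for a cracker to alter even with root access; restoring the machine is a matter of reinstalling the CGI from backup, as described above.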
The sacrificial host is really the safest choice. This is a host that is expressly out in the open and that you expect to be cracked. Certainly, this is far preferable to having any portion of your internal network connected to the Internet. However, if you also want your local employees or users to be able to access the Net, this is entirely impractical. It can, however, be implemented where you do not expect much access from the inside out, particularly in commerce situations.
A commerce situation is one where you are accepting credit card numbers over a browser interface. Be very careful about how you implement such schemes. Here is why: There are various paths you can take, and some of them represent a greater risk than others. Typically, you want to avoid (at any reasonable cost) storing your customers' credit card numbers on any server connected to the network. (You have already seen the controversy that developed after it was learned that Kevin Mitnick had acquired credit card numbers--reportedly 20,000--from the drives of Netcom.)
Generally, where you are accepting credit card numbers over the Internet, you will also be clearing them over the network. This typically requires the assistance of an outside service. There are various ways that this is implemented, although two techniques dominate that market.
The first problem is this: The validation algorithms used are now widely disseminated. That is, there are credit card number generators available across the Internet that will resolve a number as either mathematically valid or invalid--and produce fresh numbers that pass the test. Kids used them for years to circumvent the security of Internet service providers.
So, at the start, individuals could come forward with at least mathematically sound numbers for submission. Thus, simple algorithmic credit card validation subjects the accepting party to a significant amount of risk. For example, if this verification is used in the short run but the cards are later subjected to real verification, the interim period is the window during which the accepting party stands to lose goods or services to a fraudulent charge. If this period is extended (and the temporary approval of such a credit card number grants the submitter access to ongoing services), then technically, the accepting party is losing money for every day that the credit card is not actually validated.
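The arithmetic those generators rely on is, in most cases, nothing more exotic than the standard mod-10 (Luhn) check digit that card numbers carry; the sketch below shows how little it takes to test whether a number is "mathematically sound" in that sense. Passing this check says nothing about whether an account actually exists, which is exactly why algorithm-only validation is so risky.

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the mod-10 (Luhn) check."""
    digits = [int(d) for d in number if d.isdigit()]
    digits.reverse()
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:      # double every second digit, counting from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A number can pass this test without belonging to any real account.
print(luhn_valid("4111111111111111"))   # True: mathematically sound
print(luhn_valid("4111111111111112"))   # False: fails the check digit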
TIP: One very good example is utilities that exist for unlawfully accessing AOL. These utilities have, embedded within their design, automatic generators that produce a laundry list of card numbers that will be interpreted as valid. When these programs first emerged, the credit card number generators were primitive and available as support utilities. As using generators of this variety became more common, however, these utilities were incorporated into the code of the same application performing the dial-up and sign-on. The utilities would pop up a window list from which the cracker could choose a number. This number would be sent (usually by the SendKeys function in VB) to the registration form of the provider.
Secondly, and perhaps more importantly, storing the numbers on a local drive could prove a fatal option. You are then relying upon the security of your server to protect the data of your clientele. This is not good. If the information is ultimately captured, intercepted, or otherwise obtained, potentially thousands (or even hundreds of thousands) of dollars might be at stake. If there is a subsequent investigation (which there usually is), it will ultimately come out that the seed source for the numbers was your hard disk drives. In other words, after the Secret Service (or other investigating party) has determined that all victims shared only one common denominator (using your service), you will have a problem.
This is especially true if your system administrator fails to detect the breach and the breach is then an ongoing, chronic problem. There is a certain level at which this could raise legal liability for your company. This has not really been tested in the courts, but I feel certain that within the next few years, special legislation will be introduced that will address the problem. The unfortunate part of this is as follows: Such a case would rely heavily on expert testimony. Because this is a gray area (the idea of what "negligent" system administration is, if such a thing can exist), lawyers will be able to harangue ISPs and other Internet services into settling these cases, even if only in an effort to avoid sizable legal bills. By this, I mean that they could "shake down" the target by saying "I will cost you $50,000.00 in legal bills. Is it worth the trouble to defend?" If the target is a large firm, its counsel will laugh this off and proceed to bury the plaintiff's counsel in paperwork and technical jargon. However, if the target is a small firm (perhaps hiring a local defense firm that does not specialize in Internet law), a legal challenge could be enormously expensive and a drain on resources. If you have to choose, try to saddle some third party with the majority of the liability. In other words, don't store those numbers on your drives if you can help it.
The advantages and disadvantages are diverse in this scenario. First, there is the obvious problem that the accepting party is resigned to traveling blind; that is, they will never have the credit card information within their possession. Because of this, disputed claims are a serious headache.
NOTE: There are various methods through which the mechanics of this process are achieved. One is where the credit card clearing company has proprietary software that attaches to a particular port. On both the client and the server end, this port carries the information (which is encrypted before it leaves the client and decrypted after its arrival at the server). More than likely, the remote server refuses connections on almost all other ports, or the information is filtered through a pinhole in a firewall.
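To illustrate the shape of such an exchange (and not any particular vendor's protocol), the following sketch sends a single charge request over an encrypted connection to a dedicated port, using ordinary TLS as a stand-in for the proprietary encryption the note describes. The host name, port, and message format are invented for the example.

import json
import socket
import ssl

CLEARING_HOST = "clearing.example.com"   # hypothetical clearing service
CLEARING_PORT = 7443                     # hypothetical dedicated port

def submit_charge(card_number: str, amount_cents: int) -> dict:
    """Send one charge request over an encrypted socket and return the reply."""
    context = ssl.create_default_context()            # verifies the server's certificate
    with socket.create_connection((CLEARING_HOST, CLEARING_PORT)) as raw:
        with context.wrap_socket(raw, server_hostname=CLEARING_HOST) as tls:
            request = json.dumps({"card": card_number, "amount": amount_cents})
            tls.sendall(request.encode("utf-8"))
            reply = tls.recv(4096)
    return json.loads(reply.decode("utf-8"))

The essential property is the one described in the note: the card number is encrypted before it leaves the client, and the server accepts connections only on the port (or firewall pinhole) reserved for this traffic.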
Here's an example: A kid gets his parent's credit card number and charges up a storm. This information is validated by the remote server, with the accepting party storing no information. Later, the parent disputes the transaction, claiming that he never authorized such a charge. This is okay, and may happen periodically. However, obtaining records and then sorting out that dispute is both a logistical and legal problem. It is not quite as simple as disputing unauthorized charges on one's telephone bill. Because the party that cleared (and ultimately collected on) the charge is a third party (one that has no part in the exchange of goods or services), confusion can easily develop.
Imagine now if you were such a victim. You contact the party that is the apparent recipient of the charge, only to find that the company has "nothing to do with it." When consumers are confronted with this type of situation, they become less likely to do commerce over the Net. And while this is essentially no different from being confronted with unauthorized 900-number charges on your telephone bill, the average consumer will view the Internet with increasing suspicion. This is bad for Internet commerce generally. Despite that fact, however, this method is generally regarded as the most secure.
Naturally, there is also the issue of cost. Most clearing companies take a piece of the action, which means that they charge a percentage for each charge cleared. There are variations on this theme, but a few basic scenarios dominate. In the first, they charge a sizable sum for setup and request no further money from the client, instead reaping their percentage from the credit card companies at the other end. Another is where the initial cost is lower, but the client is charged a percentage on each transaction. Still another, although less common, is where the middleman company takes a smaller percentage from both sides, thereby distributing the load and making its pricing seem more competitive to both client and credit card company.
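If you want to compare these scenarios before signing anything, a few minutes of rough arithmetic is usually enough. The sketch below totals the first year's cost to the client under two of the models using invented figures (2,000 charges averaging $40 each); substitute your own volume before drawing any conclusions.

def total_cost(setup_fee: float, per_txn_rate: float, txn_count: int, avg_charge: float) -> float:
    """First-year cost to the merchant under one pricing model (illustrative only)."""
    return setup_fee + per_txn_rate * txn_count * avg_charge

# Hypothetical figures: 2,000 charges per year averaging $40 each.
flat_setup   = total_cost(setup_fee=1500.0, per_txn_rate=0.00, txn_count=2000, avg_charge=40.0)
per_txn_plan = total_cost(setup_fee=250.0,  per_txn_rate=0.03, txn_count=2000, avg_charge=40.0)
print(f"High setup, no per-charge percentage: ${flat_setup:,.2f}")    # $1,500.00
print(f"Low setup, 3% of each charge:         ${per_txn_plan:,.2f}")  # $2,650.00

At lower volume the per-transaction plan can still come out ahead; the point is simply that the break-even depends on your own numbers, not on the vendor's pitch.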
There are many services you can contract, including both consultant firms and actual software and hardware solution vendors. Here are a few:
http://www.cyburban.com/~mmdelzio/first.htm
http://www.luckman.com/wc/webcom.html
http://www.redhead.com/html/makesale.html
http://alphabase.com/ezid/nf/com_intro.html
http://www.mticentral.com/Commerce/
Credit Card Transactions: Real World and Online. Keith Lamond. 1996.
Digital Money Online: A Review of Some Existing Technologies. Dr. Andreas Schöter and Rachel Willmer. Intertrader Ltd. February 1997.
Millions of Consumers to Use Internet Banking, Booz, Allen & Hamilton Study Indicates.
A Bibliography of Electronic Payment Information.
Electronic Cash, Tokens and Payments in the National Information Infrastructure.
Electronic Commerce in the NII.
A Framework for Global Electronic Commerce. Clinton Administration.
Card Europe UK--Background Paper: Smartcard Technology Leading to Multi Service Capability.
Electronic Payment Schemes. Dr. Phillip M. Hallam-Baker. World Wide Web Consortium.
Generic Extensions of WWW Browsers. Ralf Hauser and Michael Steiner. First Usenix Workshop on Electronic Commerce. July 1995.
Anonymous Delivery of Goods in Electronic Commerce. Ralf Hauser and Gene Tsudik. IBM TDB, 39(3), pp. 363-366. March 1996.
On Shopping Incognito. R. Hauser and G. Tsudik. Second Usenix Workshop on Electronic Commerce. November 1996.
The Law of Electronic Commerce: EDI, Fax and Email--Technology, Proof and Liability. B. Wright. Little, Brown and Company. 1991.
Fast, Automatic Checking of Security Protocols. D. Kindred and J. M. Wing. Second Usenix Workshop on Electronic Commerce, pp. 41-52. November 1996.
Electronic Commerce on the Internet. Robert Neches, Anna-Lena Neches, Paul Postel, Jay M. Tenenbaum, and Robert Frank. 1994.
NetBill Security and Transaction Protocol. Benjamin Cox, J. D. Tygar, and Marvin Sirbu. First Usenix Workshop on Electronic Commerce. July 1995.
CyberCash Credit Card Protocol. Donald E. Eastlake, Brian Boesch, Steve Crocker, and Magdalena Yesil. Version 0.8. July 1995. (Internet Draft.)
Commerce on the Internet--Credit Card Payment Applications over the Internet. Taher Elgamal. July 1995.
Business, Electronic Commerce and Security. B. Israelsohn. 1996.